Perceptions of Agentic AI in Organizations: Implications for Responsible AI and ROI

Ackerman, Lee

arXiv.org Artificial Intelligence

As artificial intelligence (AI) systems rapidly gain autonomy, the need for robust responsible AI frameworks becomes paramount. This paper investigates how organizations perceive and adapt such frameworks amidst the emerging landscape of increasingly sophisticated agentic AI. Employing an interpretive qualitative approach, the study explores the lived experiences of AI professionals. Findings highlight that the inherent complexity of agentic AI systems and their responsible implementation, rooted in the intricate interconnectedness of responsible AI dimensions captured in the study's thematic framework (an analytical structure developed from the data), combined with the novelty of agentic AI, contributes to significant challenges in organizational adaptation. These challenges are characterized by knowledge gaps, limited emphasis on stakeholder engagement, and a strong focus on control; by hindering effective adaptation and implementation, they ultimately compromise both responsible AI and the realization of ROI.


A Rapid Review of Responsible AI frameworks: How to guide the development of ethical AI

Barletta, Vita Santa, Caivano, Danilo, Gigante, Domenico, Ragone, Azzurra

arXiv.org Artificial Intelligence

In recent years, the rise of Artificial Intelligence (AI), and its pervasiveness in our lives, has sparked a flourishing debate about the ethical principles that should guide its implementation and use in society. Driven by these concerns, we conduct a rapid review of several frameworks providing principles, guidelines, and/or tools to help practitioners in the development and deployment of Responsible AI (RAI) applications. We map each framework w.r.t. the different Software Development Life Cycle (SDLC) phases, discovering that most of these frameworks address only the Requirements Elicitation phase, leaving the other phases uncovered. Very few of these frameworks offer supporting tools for practitioners, and those that do are mainly provided by private companies. Our results reveal that there is no "catch-all" framework supporting both technical and non-technical stakeholders in the implementation of real-world projects. Our findings highlight the lack of a comprehensive framework encompassing all RAI principles and all SDLC phases that could be navigated by users with different skill sets and different goals.


Back to Basics: Revisiting the Responsible AI Framework

#artificialintelligence

In the last few months we have seen promising developments in establishing safeguards for AI. These include a landmark EU regulation proposal on AI that prohibits unacceptable AI uses and imposes mandatory disclosures and evaluations for high-risk systems, an algorithmic transparency standard launched by the UK government, mandatory audits for AI hiring tech in New York City, and a draft AI Risk Management Framework developed by NIST at the request of the US Congress, to name a few. That being said, we are still in the early days of AI regulation. There is a long road ahead to minimize the harms that algorithmic systems can cause. In this article series, I explore different topics related to the responsible use of AI and its societal implications.


Council Post: How To Build Responsible AI, Step 1: Accountability

#artificialintelligence

The development, deployment and operation of irresponsible AI has done, and will continue to do, significant damage to individuals, businesses, markets, societies and economies of every scale. Now is the time to be explicit in the processes and systems that we create. In a series of articles, I will explore each of the elements of responsible AI and its crucial role in building the responsible AI of the future. The first component of responsible AI that I will address in this second article in the series is accountability, which is especially important in areas such as supply chain, finance, national security and intelligence, cyberbalkanization, data protection, data destruction and data/algorithm aggregations. Rather than assume we all mean the same thing when we use the term "accountability," I'll suggest three critical features for how we distill the term and understand it beyond its etymology.


3 components CIOs need to create an ethical AI framework

#artificialintelligence

Only 20% of companies report having an ethical artificial intelligence framework in place and just 35% have plans to improve governance of AI systems and processes in 2021, according to PwC data. I believe every CIO needs a responsible AI plan before implementing the technology. Businesses shouldn't wait for this to be mandatory. It doesn't matter if the CIO is buying the technology or building it. AI as a technology is neutral -- it is not inherently ethical or unethical.


New Cybersecurity Tools and Techniques are Needed to Protect AI

#artificialintelligence

Artificial intelligence is reorganizing the world, introducing innovations that will likely exceed those that came with the World Wide Web. And, as with the Web, there were, and still are, security concerns. Today, trust in artificial intelligence is probably the single greatest risk to continuing AI innovation and adoption. A simple framework for the operational elements that need to be addressed for the responsible deployment of artificial intelligence must include 'Fairness and Bias' and 'Interpretability and Explainability', as well as the newest and equally important element of 'Robustness and Security'. Note that these are operational considerations; privacy should be inherent in the design and implementation of responsible AI -- that is, privacy must be foundational in every step of the process.


Artificial intelligence is coming. Is your business ready?

#artificialintelligence

CEOs see the potential--and the risks. Our strategist's guide and responsible AI framework can set companies on the right path. As machines continue to assume many of the tasks that once depended on human agency--such as driving a car or making a decision--we find ourselves looking toward a future in which nearly all technology applications will likely incorporate some form of artificial intelligence (AI). It has already changed the trajectory of scientific research, enabling new creative and technological milestones. At the same time, the rapid advancement of AI technologies has raised fundamental questions about social values and the potential unintended consequences of AI.